When Mental States Matter, When They Don’t, and What That Means for Morality
Social and Personality Psychology Compass 7/8 (2013): 585–604, 10.1111/spc3.12044

Abstract
Research has shown that moral judgments depend on the capacity to engage in mental state reasoning. In this article, we will first review behavioral and neural evidence for the role of mental states (e.g., people’s beliefs, desires, intentions) in judgments of right and wrong. Second, we will consider cases where mental states appear at first to matter less (i.e., when people assign moral blame for accidents and when explicit information about mental states is missing). Third, we will consider cases where mental states, in fact, matter less, specifically, in cases of “purity” violations (e.g., committing incest, consuming taboo foods). We will discuss how and why mental states do not matter equivalently across the multi-dimensional space of morality. In the fourth section of this article, we will elaborate on the possibility that norms against harmful actions and norms against “impure” actions serve distinct functions – for regulating interpersonal interactions (i.e., harm) versus for protecting the self (i.e., purity). In the fifth and final section, we will speculate on possible differences in how we represent and reason about other people’s mental states versus our own beliefs and intentions. In addressing these issues, we aim to provide insight into the complex structure and distinct functions of mental state reasoning and moral cognition. We conclude that mental state reasoning allows us to make sense of other moral agents in order to understand their past actions, to predict their future behavior, and to evaluate them as potential friends or foes.

Many of us find morality interesting because reasonable people – ordinary folks and professional philosophers alike – disagree about what’s right and wrong. Yet it is precisely this disagreement that presents a challenge for students of moral psychology. If we wish to understand the moral mind, whose mind should we study? One solution is to tackle as many minds as we can find and not simply undergraduate minds sitting in introductory psychology classes (a current source of many research participants; Henrich, Heine, & Norenzayan, 2010). A complementary approach, though, is to identify universal moral rules (e.g., DeScioli, Asao, & Kurzban, 2012; Hauser, 2006; Mikhail, 2007) that apply to all moral judgments, independent of the culture-specific content of moral codes. One candidate rule (the “rule of intent”) is this: intentional wrongdoings are morally worse than accidental wrongdoings. For example, if we were to discover that aliens on another planet believe certain seemingly arbitrary actions to be morally wrong – touching elbows, making eye contact, and mixing salt with water – we might infer that, within the alien culture, these “crimes” would be perceived as even more wrong when committed on purpose versus by accident. In our own culture, intent makes the difference between murder and manslaughter (Hart & Honoré, 1959). As such, mental states, including people’s intentions, beliefs, and desires, may represent a cognitive constant in the messiness of morality. In this article, we will first present behavioral and neural evidence for the role of mental states in moral judgments of right and wrong. Second, we will consider cases where mental states appear at first to matter less (i.e., when people assign moral blame for accidents and when explicit information about mental states is missing).
Third, we will consider cases where mental states, in fact, matter less, specifically, in cases of “purity” violations (e.g., committing incest, consuming taboo foods). We will discuss how and why mental states do not matter equivalently across the space of morality. In the fourth section of this article, we will elaborate on the possibility that norms against harmful actions and norms against “impure” actions serve distinct functions – for regulating interpersonal interactions (i.e., harm) versus for protecting the self (i.e., purity). In the fifth and final section, we will speculate on possible differences in how we represent and reason about other people’s mental states versus our own beliefs and intentions. In addressing these issues, we aim to provide insight into the complex structure and distinct functions of mental state reasoning and moral cognition.

For Moral Judgment, It’s the Thought that Counts

Folk intuition and the law converge on the notion that murder is worse than manslaughter, as mentioned above. In this first section, we will review behavioral and neural evidence for the more general hypothesis that mental states matter for moral judgments, for innocent and malicious intentions alike. We will also review how reduced or impaired mental state reasoning influences moral judgment. Importantly, empirical work supports a role for intent not only when lives are at stake but also across diverse contexts (Cushman, 2008; Decety, Michalska, & Kinzler, 2012; Killen, Mulvey, Richardson, Jampol, & Woodward, 2011; Young, Cushman, Hauser, & Saxe, 2007). For example, allocating resources selfishly or unfairly is seen as wrong, but doing so on purpose is seen as worse than doing so by accident (Cushman, Dreber, Wang, & Costa, 2009). In addition, information about mental states informs not simply moral judgments of individuals but also moral judgments of groups (e.g., corporations, unions, political parties; Waytz & Young, 2012). Put plainly, innocent intentions in the case of accidents serve to mitigate blame. Meanwhile, malicious intentions can lead to assignments of blame even in the absence of actual harm, as in failed murder attempts. Indeed, unsavory desires on their own are often enough to evoke blame even when those desires are causally disconnected from the harmful event (e.g., a man who is forced at gunpoint by attackers to kill his wife’s lover, whom he wants dead anyway; Woolfolk, Doris, & Darley, 2006). We even assign moral blame for harmful desires when no attempt at harm occurs (Inbar, Pizarro, & Cushman, 2012). For example, we blame agents who seek to benefit from others’ misfortune even when the unfortunate events are out of those agents’ control (e.g., traders who profited from but did not cause or even attempt to cause the subprime mortgage crisis; Inbar, Pizarro, & Cushman, 2012; see also Pizarro, Uhlmann, & Bloom, 2003). These findings reveal that information about intentions and desires often dominates our moral judgments. It is the failure to process emotionally salient intentions (e.g., murderous desires) that results in abnormally lenient judgments of failed attempts to harm (including failed murder attempts), as in the case of patients with focal lesions to the ventral sub-region of the medial prefrontal cortex (VMPFC), an area implicated in social–emotional processing (Young et al., 2010). Recent work has zeroed in on the neural mechanisms supporting mental state reasoning for moral judgments (for reviews, see Dungan & Young, 2011; Young, 2013).
This work builds directly on prior (and ongoing) work on the neural basis of social cognition. Indeed, a number of studies using functional magnetic resonance imaging (fMRI) have revealed a consistent neural network for social cognition, including sub-regions of the medial prefrontal cortex (MPFC), right and left temporo-parietal junction (RTPJ and LTPJ), and precuneus (Fletcher et al., 1995; Gallagher et al., 2000; Gobbini et al., 2007; Saxe & Kanwisher, 2003). A recent activation likelihood estimation (ALE) meta-analysis of 135 studies showed overlapping neural networks for social cognition (e.g., mental state reasoning) and moral cognition (Bzdok et al., 2012). Much of our own work has targeted the role of one key region, the RTPJ (Perner et al., 2006; Saxe & Kanwisher, 2003), in moral cognition (Young et al., 2007). We note that other work indicates additional roles for the RTPJ, for example, in prosocial behavior, but our focus here will be on moral judgment (Morishima, Schunk, Bruhin, Ruff, & Fehr, 2012; see also Carter, Bowling, Reeck, & Huettel, 2012). Recent evidence reveals a role for the RTPJ in supporting mental state reasoning for moral judgment. The RTPJ supports the initial encoding of the mental state and its eventual integration with other task-relevant information (e.g., outcome information) for moral judgment (see Figure 1; Young & Saxe, 2008). At the time of integration, the magnitude of response in the RTPJ is correlated with moral judgment; participants with a higher RTPJ response to accidental harms, on average, deliver more forgiveness and less blame for accidents, compared to participants with a lower RTPJ response (Young & Saxe, 2009).

Figure 1. The right temporo-parietal junction (RTPJ) is a key brain region for processing mental states such as people’s beliefs and intentions (e.g., “she wanted to poison him”, left; “she thought it was sugar”, right) during moral cognition (e.g., judgments of attempted harms, left; accidental harms, right).

More recently, the use of multi-voxel pattern analysis (MVPA) in fMRI research has allowed us to determine whether the spatial pattern of neural activity across voxels within the RTPJ (as well as other brain regions) differentiates between intentional harms and accidental harms. In other words, does the RTPJ support this computation, and, if so, how? We found that even though the magnitude of neural response (averaged across voxels) in the RTPJ is comparably high for intentional and accidental harms, MVPA reveals that the voxel-wise pattern in the RTPJ distinguishes between intentional and accidental harms (Koster-Hale, Saxe, Dungan, & Young, 2013); moreover, individual differences in neural discriminability correlate with individual differences in behavioral discriminability – the extent to which participants distinguish between intentional and accidental harms in their moral judgments. In addition, research using high-density event-related potentials (ERPs) reveals differentiation in the RTPJ between intentional and accidental harms as fast as 62 ms post-stimulus while participants view morally relevant visual stimuli (Decety & Cacioppo, 2012).
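For readers who want a concrete sense of how such pattern analyses work, the sketch below (Python, run on simulated data) illustrates the general MVPA logic described above: a cross-validated classifier is trained to decode intentional versus accidental harm trials from voxel patterns, and per-participant decoding accuracy is then related to behavioral discriminability. This is a minimal illustration under our own assumptions; the variable names, classifier choice, and effect sizes are not taken from the published studies.

```python
# Illustrative MVPA-style sketch on simulated "RTPJ" voxel patterns (not the
# authors' analysis pipeline). It shows how a spatial pattern can carry intent
# information even when mean response is comparable across conditions, and how
# per-participant decoding accuracy can be correlated with behavior.
import numpy as np
from scipy import stats
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_participants, n_trials, n_voxels = 20, 40, 100

neural_acc, behav_diff = [], []
for _ in range(n_participants):
    # 40 trials per participant: 20 accidental (0) and 20 intentional (1) harms.
    labels = np.repeat([0, 1], n_trials // 2)
    # Shared mean response for both conditions, plus a participant-specific
    # condition difference carried by a subset of voxels (the "pattern").
    patterns = rng.normal(loc=1.0, scale=1.0, size=(n_trials, n_voxels))
    signal = rng.uniform(0.0, 0.5)
    patterns[labels == 1, :10] += signal

    # Cross-validated decoding of intent from the voxel pattern.
    clf_acc = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5).mean()
    neural_acc.append(clf_acc)

    # Behavioral discriminability: simulated moral-judgment gap between
    # intentional and accidental harms, covarying with the neural signal.
    behav_diff.append(signal * 4 + rng.normal(scale=0.5))

# Across participants, does neural discriminability track behavioral discriminability?
r, p = stats.pearsonr(neural_acc, behav_diff)
print(f"decoding accuracy vs. behavioral intent effect: r = {r:.2f}, p = {p:.3f}")
```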
Convergent behavioral evidence suggests that intent represents an early input to moral judgment (Malle & Holbrook, 2012). Finally, temporarily disrupting activity in the RTPJ using transcranial magnetic stimulation (TMS) leads to more outcome-based (i.e., fewer intent-based) moral judgments (Young, Camprodon, Hauser, Pascual-Leone, & Saxe, 2010).

Other work has targeted the role of mental state reasoning in moral cognition across the life span. As children develop the capacity to reason fully and flexibly about mental states, including false beliefs, they are more likely to forgive an accidental agent, e.g., someone who throws away a classmate’s cupcake, which is hidden in a brown paper bag that looks like trash (Killen et al., 2011). However, younger children between the ages of three and four years focus more on the bad outcome (e.g., cupcake in the trash) and less on the lack of negative intent (e.g., she didn’t mean any harm, she didn’t know it was a cupcake, she thought it was trash) and therefore assign more blame for an accident (see also Cushman, Sheketoff, Wharton, & Carey, 2013; Hebble, 1971; Piaget, 1965; Shultz, Wright, & Schleifer, 1986; Yuill & Perner, 1988; Yuill, 1984; Zelazo, Helwig, & Lau, 1996). At the other end of the age spectrum, older adults (mean age of 71.8 years) also show increased reliance on bad outcomes (Moran, Jolly, & Mitchell, 2012), associated with reduced activity in the dorsal sub-region of the MPFC, another node in the neural network for social cognition and mental state reasoning. Outcome-based judgments may therefore be described by a U-shaped curve, appearing most robustly at either end of the age spectrum; on one end are young children who have not yet developed mature mental state reasoning capacities, and on the other end are older adults who show a decline in mental state reasoning and associated neural activity. A similar focus on outcomes versus intentions emerges in the case of autism, a neurodevelopmental disorder associated with impairments in social cognition, including specific deficits in mental state reasoning (Moran et al., 2011). Like young children and older adults, high-functioning individuals with autism also judge accidents more harshly on the basis of the bad outcome rather than the neutral intent. This behavioral pattern in high-functioning autism is also consistent with a recent fMRI study using the approach of MVPA (described above): in participants with autism, the spatial pattern of neural activity in the RTPJ does not reliably distinguish between intentional and accidental harms (Koster-Hale et al., 2013). We suggest that accidents pose a particular challenge because they pit salient information about a bad outcome against relatively neutral information about a false belief or lack of knowledge or intent. Forgiving an accident may require overriding a pre-potent response to an emotionally salient outcome, from a lost cupcake to a lost life (cf. Miller et al., 2010). As we discuss further in the next section, forgiving an accident may pose a challenge not simply for younger and older populations and individuals with autism but for everyone else as well. According to a recent study, the individuals who are able to ignore accidental outcomes (e.g., salient harms) in favor of “hyper-rational”, intent-based judgments are those with a clinical diagnosis of psychopathy (Young et al., 2012).
Other studies reveal individual differences in the tendency to use mental state information for exculpation (Cohen & Rozin, 2001; Young & Saxe, 2009). In one study, participants were instructed to weigh “mitigating circumstances” (relevant for the sentencing of guilty defendants); regions for mental state reasoning (including the TPJ, dorsal MPFC, and precuneus) and for emotional empathy (including the right middle insula) were recruited, and individual differences in the inclination to mitigate were correlated with activity in the latter set of regions (Yamada et al., 2012). The ability to integrate cognitive inputs for moral judgment appears to depend also on functional connectivity between areas for social cognition and emotional processing, i.e., the ventral MPFC and the amygdala (Decety et al., 2012). These results highlight the multiple mechanisms that may interact and compete during moral cognition (i.e., outcome processing versus mental state processing; Cushman, 2008; Greene et al., 2001; Greene et al., 2004; Young et al., 2007). In sum, mental states matter for moral judgments of harmful actions – from poisoning other people to tossing other people’s cupcakes in the trash. Considering innocent intentions can lead people to assign less blame for accidents, and considering guilty intentions can lead to moral condemnation even in the absence of a harmful outcome.

The Thought Counts Even More than You Think

Given the evidence above, the simple point that intentions matter for moral judgment may seem exceedingly uncontroversial. In this section, we will therefore consider two cases in which intentions may appear, upon initial consideration, to count less – first, when we assign moral blame for accidents in spite of the actor’s innocent intentions and, second, when intent information isn’t available. In brief, it would seem (at first) that (1) when observers blame people for causing accidents, they do so on the basis of the bad outcomes that are brought about in spite of benign intentions or false beliefs, and (2) in the absence of mental state information, observers would be forced to rely on other information, such as readily observable, external actions and outcomes. Do these two cases count as “exceptions” to the rule of intent? We show, on the contrary, that moral judgments in both of these cases nevertheless depend on inferences about unobservable, internal mental states. As mentioned at the end of the first section above, many of us are familiar with the struggle of forgiveness. Like young children, older adults, and individuals with autism, we may find it difficult, on occasion, to forgive even in the face of obviously unintentional harm. Imagine a friend who spills ink on your white carpet (e.g., she thought the container held pencils), a colleague who emails you a debilitating virus (e.g., he believed his own computer to be fixed), or a neighbor who feeds your seriously peanut-allergic child a PB&J sandwich (e.g., she heard you say “almonds”). Whatever blame we cannot help but assign seems due in large part (if not in full) to the bad outcome, ranging from annoying to deadly (e.g., the ruined carpet, the corrupted hard drive, the dead child).
Indeed, outcomes play an important role especially in judgments of blame and punishment, as opposed to evaluations of an action as wrong or permissible (Cushman, 2008; Cushman et al., 2009) or assessments of personal moral character (e.g., Critcher, Inbar, & Pizarro, 2012; Pizarro & Tannenbaum, 2011; Tannenbaum, Uhlmann, & Diermeier, 2011). We suggest, though, that while outcomes contribute to some of the blame we assign for accidents, mental state factors contribute as well and perhaps even more than outcomes (Young, Nichols, & Saxe, 2010). The case of accidents is the first of the two “exceptions” introduced at the start of this section. Consider the last example – death by PB&J. Suppose that your neighbor, Mrs. Smith, holds a false belief about your child’s allergy – she thinks, falsely, that your child is allergic to almonds, not peanuts – and she accidentally kills your child. Imagine a second, happier version of this story in which Mrs. Smith holds a true belief about your child’s allergy and therefore causes no harm at all. Surely the most salient difference between these two versions is the presence of harm in one case and the absence of harm in the other. However, a more subtle but, as we will argue, more important difference is the status of the belief – whether it is true or false. Does this difference matter? Does it matter more? Imagine now a third “hybrid” version of the story in which Mrs. Smith holds a false belief – she thinks, falsely, that your child is allergic to almonds, not peanuts; she prepares a PB&J sandwich for your child, but luckily it gets eaten by her husband, so – hungry child aside – no harm is done. Participants in a recent study judge Mrs. Smith in this third “hybrid” version (false belief, no harm) to be nearly as blameworthy as she is in the first version (false belief, extreme harm); and both cases are judged much worse than the happy version (true belief, no harm; Young et al., 2010). Why would the status of the belief matter so much? It turns out that participants assess false beliefs as unjustified or unreasonable; these judgments of negligence drive judgments of moral blame. In other words, we may be harsh on accidents not simply because of the bad outcome but also because of mental state assessments (e.g., Mrs. Smith should have known better!).

Now we turn to the second of the two “exceptions”. What happens when information about mental states isn’t accessible (i.e., when we don’t know whether beliefs are true or false, justified or unjustified; whether intentions are innocent or malicious)? Are we forced to rely on alternate routes to moral evaluation? In other words, if we lack direct access to mental state information, we may base our judgments instead on other facts, namely, facts about the actions and the outcomes. Notably, actions and outcomes are often observable, unlike mental states, which are invisible and “hidden” inside people’s heads. On the contrary, recent evidence reveals spontaneous mental state reasoning for moral judgment even when explicit information about mental states is not provided. For example, the RTPJ, a key brain area for mental state reasoning, as discussed above, is selectively recruited for morally relevant facts versus morally irrelevant facts about an action (Young & Saxe, 2009).
After hearing about a person who puts a powdery substance into someone’s coffee, participants are told either that the powder is poisonous or safe (morally relevant) or that the powder fills the container it is in (morally irrelevant). This differential neural activation (i.e., more RTPJ activity for morally relevant information) suggests that moral judgments depend on spontaneous mental state inference even in the absence of explicit mental state information. Participants may be motivated to consider what moral agents may be thinking (e.g., did she think it was sugar?) and whether they know what they are doing (e.g., did she know she was poisoning her friend?). Of course, whether RTPJ activation in this instance reflects specific answers to these questions or the mere effort to answer them is worth exploring, which we do next.

How might participants infer mental states in the absence of explicit mental state information? A growing body of literature suggests that information about an actor’s moral character, or prior record, may inform assessments of his or her harmful (or helpful) actions as intentional or not (Alicke, 2000, 2008; Knobe, 2005, 2010). Recent fMRI studies have combined behavioral and neural approaches to examine mental state inferences in moral contexts. In one fMRI study, participants were set up to interact with “other players” in an economic game, where these “players” behaved fairly or unfairly (Kliemann et al., 2008). After these “interactions”, participants read, in the scanner, a series of stories presented as “written by the players” about the players’ past actions of ambiguous intent (e.g., broke roommate’s lamp, spilled sister’s nail polish, shrunk friend’s sweater). Across participants, harmful actions performed by previously unfair players were judged as more blameworthy and more intentional than the same actions performed by previously fair players. Notably, these judgments were also associated with increased RTPJ activity, reflecting participants’ inference of blameworthy intent based on negative prior record (e.g., whether the players had been unfair to the participants in the economic game). Additional fMRI studies have uncovered broadly similar patterns in which increased RTPJ activity reflects inferences of negligence (Young et al., 2010b), negative intent, or even the lack of positive intent (Young, Scholz, & Saxe, 2011). In other words, mental state inferences and assessments, supported by the RTPJ, may depend in part on “background” information about agents’ past behavior and moral character. Together, these findings suggest that (1) considering false beliefs as unreasonable or unjustified can lead people to assign more blame for accidents, and (2) in the absence of explicit mental state information, people make spontaneous inferences about beliefs and intentions for moral evaluation. Thus, mental states matter even in cases where they might have appeared at first to matter less. By contrast, the next sections focus on cases where mental states, in fact, matter less.

Mental States Matter More for Harmful Actions, Less for Purity Violations

Mental state factors (e.g., whether intentions are good or bad, whether beliefs are true or false, justified or unjustified) represent robust inputs to moral judgments of harmful actions, as revealed in the first two sections above.
The aim of this next section is to investigate whether mental states matter equivalently across distinct types of moral actions or distinct moral domains (Graham et al., 2011; Graham, Haidt, & Nosek, 2009; Haidt, 2007). Recent neuroimaging evidence using voxel-based morphometry (VBM) suggests that values across distinct moral domains (e.g., harm, purity) are associated with distinct neurological bases; in particular, individual differences in regional brain volume reflect variation in sensitivity to individualizing values (e.g., harm) versus binding values (e.g., purity; Lewis et al., 2012). Thus, we will consider not simply harmful actions (as in the previous sections) but also actions that violate purity norms (e.g., consensual incest, eating taboo foods) – actions that don’t necessarily cause harm but that nevertheless appear to defile our moral selves or reflect poorly on our moral character (Inbar, Pizarro, & Cushman, 2012; Tannenbaum et al., 2011).

Extensive work linking distinct moral emotions to distinct moral domains reveals that harmful actions elicit anger, whereas purity violations (including “taboo” behaviors related to food and sex) elicit disgust (e.g., Horberg, Oveis, Keltner, & Cohen, 2009; Rozin, Lowery, Imada, & Haidt, 1999; Russell & Giner-Sorolla, 2011a, 2013; Russell, Piazza, & Giner-Sorolla, 2013; but see Salerno & Peter-Hagene, in press, for evidence on the interactive effect of anger and disgust on moral outrage). For example, in one emotion induction study, anger-eliciting sounds (“noise music”) led uniquely to harsher moral judgments of harm violations (e.g., “crimes against persons”), whereas disgust-eliciting sounds (the sound of an emetic event, vomiting) led uniquely to harsher moral judgments of purity violations (e.g., “crimes against nature”; Seidel & Prinz, 2013). In addition, anger reactions are flexibly influenced by contextual cues and social justifications, whereas disgust reactions are largely immune to these factors (Russell & Giner-Sorolla, 2011b, 2011c; Russell & Giner-Sorolla, 2013). For example, anger is modulated by information about intent (i.e., whether the harm was intentional or accidental), whereas disgust is modulated solely by the presence or absence of an impure action (Russell & Giner-Sorolla, 2011a). Specifically, intentional harms elicit more anger (but not more disgust) than accidents, whereas taboo actions elicit more disgust than non-taboo actions (see Figure 2).

An outstanding question is this: do moral judgments of harmful actions depend more on intent than moral judgments of purity violations? To test the specific prediction that the “rule of intent” applies differently to distinct moral domains (harm versus purity), we presented participants with a series of intentional and accidental harms, similar to the ones described in the first section, as well as intentional and accidental purity violations (Young & Saxe, 2011). In one example of an accidental harm, participants were asked to imagine preparing a person’s dish with peanuts without knowing about the person’s peanut allergy. In one example of an accidental purity violation (e.g., incest), participants were asked to imagine sleeping with someone who turned out, the next day, to be a long-lost sibling.
Figure 2. Distinct moral norms serve distinct functions for regulating interpersonal interactions (left) versus for protecting the self (right): it is wrong to harm others, and it is wrong to defile the self. Differences concern content (harm versus purity), emotional signatures (anger/guilt versus disgust/shame), function (other-focus versus self-focus), and reliance on mental state information such as intent.

As predicted, we found that for a series of violations that were judged equally harshly across domains (harm and purity), participants perceived a large moral difference between intentional and accidental harms and a relatively small (but still statistically significant) moral difference between intentional and accidental purity violations (represented by bodily violations related to food and sex; Chapman & Anderson, 2013; Russell & Giner-Sorolla, 2013; Tybur, Lieberman, Kurzban, & DeScioli, 2013). In a separate experiment, not only were accidental harms judged less harshly (based on false beliefs and innocent intentions) than accidental purity violations such as accidental incest, but failed attempts to harm others were judged more harshly (based on false beliefs and guilty intentions) than failed attempts to commit incest (e.g., imagine sleeping with someone you believe to be your long-lost sibling only to discover, after the fact, that you’re not actually related). In a final twist, we presented participants with different kinds of failed attempts – some based on false beliefs (as in the scenarios above) and others based on true beliefs but involving otherwise thwarted actions (Young & Saxe, 2011). Imagine the following attempted harm: you know your nemesis is allergic to peanuts, and you decide to sneak a few peanuts into her lunch; however, you have misplaced your peanuts. Now imagine the following attempt at incest: you know your one-night stand is your long-lost sibling, and you decide to sleep together; however, the fire alarm sounds, and the opportunity passes. If harmful intent is what matters most for judging attempted harms, then participants should not distinguish between different types of attempts (e.g., whether they involve false beliefs or true beliefs), and indeed they did not. By contrast, participants did distinguish between the two types of attempted incest. Nearly sleeping with one’s true sibling was judged to be much worse than actually sleeping with a person falsely believed to be a sibling. In other words, moral judgments of (attempted) incest were based more on features of the (attempted) act, determined by the facts of the world (e.g., whether the two were actually related), and less on the contents of the actor’s mind (e.g., whether they thought they were related). Consistent with these behavioral findings, fMRI evidence suggests that the RTPJ (discussed above as a brain area for mental state reasoning) is recruited more robustly for evaluating harms versus purity violations (Young et al., in prep). In addition, although the magnitude of response (averaged across voxels) in the RTPJ is high for intentional and accidental harms, multi-voxel pattern analyses (MVPA) reveal that the spatial pattern of neural activity across voxels within the RTPJ holds information about intent – but only for harmful actions and not for purity violations (see also Koster-Hale et al., 2013).
In other words, the voxel-wise pattern in the RTPJ distinguishes between intentional and accidental harms but not between intentional and accidental purity violations. Together, the neural and behavioral findings suggest that harmful actions elicit greater attention to agents’ mental states compared to purity violations, and that judgments of harmful (versus impure) actions rely more on specific computations for discriminating between intentional and accidental acts, supported by neural substrates for mental state reasoning. In the next section, we will consider a theoretical account of the different role of mental states in the harm domain versus the purity domain.

Distinct Moral Norms for You and Me

The previous section suggests that mental states matter more for judgments of harmful actions versus purity violations. In this section, we propose an adaptive account of this cognitive difference. We propose that distinct moral norms serve distinct functions – for regulating interpersonal relationships versus for protecting the self. In other words, (1) it is wrong to harm others, and (2) it is wrong to defile the self. We also present preliminary evidence for the link between Harm and Other-focus and between Purity and Self-focus (the “HOPS” model; see Figure 2). As such, the HOPS account falls in line with other accounts that motivate drawing distinctions between moral norms by appealing to distinct evolved functions (e.g., Haidt & Joseph, 2004; see also Russell & Giner-Sorolla, 2013; Tybur et al., 2013).

Why might intent matter more for judgments of harms? One possibility is that moral norms against causing harm function as moral norms against causing harm to others. More specifically, the proposal is that harm norms (e.g., don’t hurt others) serve to limit our negative impact on each other. Indeed, paradigmatic cases of harm (physical or psychological) feature at least one agent (the violator) who harms at least one patient (the victim; Gray & Wegner, 2009, 2012; Gray, Young, & Waytz, 2012; Young & Phillips, 2011). The victim may demand an explanation from the violator, and the violator might appeal to her innocent intent (e.g., “I didn’t mean to do it”). As far as typical harms are concerned, it takes (at least) two (Gray & Wegner, 2012). What does interpersonal morality have to do with intent? Information about intent supports not only explanations and evaluations of other people’s past actions but also reliable predictions of their future behavior. Especially in the case of accidents, only knowing a person’s true intentions can afford an accurate identification of “friend or foe”. Mental state reasoning is thus crucial for social interaction and moral judgment (for a review, see Young & Waytz, in press). In particular, moral judgments of harms (which tend to be interpersonal) rely on mental state reasoning.

Why might intent matter less for judgments of purity violations? Moral norms against sleeping with blood relatives, eating taboo foods, or touching taboo substances may have evolved as a means for us to protect ourselves, for our own good, from possible contamination.
In particular, researchers have proposed that our reactions of disgust, elicited by purity violations, evolved initially for pathogen avoidance and food rejection (e.g., Chapman & Anderson, 2013; Chapman, Kim, Susskind, & Anderson, 2009; Rozin, Haidt, & Fincher, 2009; Rozin, Haidt, & McCauley, 2008; Russell & Giner-Sorolla, 2013; Tybur et al., 2013; but see Rottman & Blake, submitted; Royzman & Kurzban, 2011). When we worry about negatively impacting ourselves, we may care less about whether the impact is accidental or intentional; the key for ourselves is to avoid the contamination, the unsavory outcome. Indeed, purity violations like consensual incest or eating taboo foods (e.g., horse meat, dog meat, rat meat, depending on the culture) are often deemed morally offensive even in the absence of any victims (e.g., see Haidt, 2001; Haidt, Koller, & Dias, 1993). Often, such “impure” acts directly affect only the participating parties and not third parties. When we think about engaging in these acts ourselves, even by accident, they may nevertheless feel wrong.

Recent work on moral emotions, discussed briefly in the previous section, is consistent with the HOPS (harm-other, purity-self) proposal. Anger (associated with harm) is elicited more not only for intentional versus accidental harms but also for harms that affect another person versus one’s own self (Gutierrez & Giner-Sorolla, 2007; Russell & Giner-Sorolla, 2011a). By contrast, disgust (associated with impurity) is modulated not by intent (intentional versus accidental) or target (self versus other) but by the presence or absence of moral impurity (e.g., taboo behavior). Other work reveals, more generally, the inflexible nature of disgust compared to anger (Hutcherson & Gross, 2011; Russell & Giner-Sorolla, 2011b, 2011c). People are able to mitigate their own reactions of anger but not disgust by imagining “different circumstances” (Russell & Giner-Sorolla, 2011b); people are able to provide good reasons in response to an elicitor (e.g., pedophilia) for their anger (“Pedophiles violate other people’s human rights”) but not for their disgust (“Pedophiles are gross”; Russell & Giner-Sorolla, 2011b); and people would rather evoke anger in others than disgust (Hutcherson & Gross, 2011). Anger, elicited by interpersonal harms, motivates people to take (interpersonal) action – to punish the offenders, to repair relationships, or both; by contrast, disgust, elicited by purity violations, typically motivates simple avoidance – increasing distance between the self and the stimulus (Gutierrez & Giner-Sorolla, 2007; but see Widen & Russell, 2013). Similar patterns have been observed for the self-conscious emotions of guilt and shame, which may arise in the agents who violate harm and purity norms in response to their own actions (Giner-Sorolla & Espinosa, 2011; for a recent review of guilt and shame, see Cohen, Panter, & Turan, 2012). Recent work indicates that individuals may also experience self-conscious emotions (guilt, shame) in response to the other-directed emotions (anger, disgust) that typically arise in observers who are judging the actions: guilt emerges in response to others’ anger, and shame emerges in response to others’ disgust (Giner-Sorolla & Espinosa, 2011).
Guilt, an “approach” emotion, motivates relationship repair, whereas shame (like disgust) motivates avoidance (Sheikh & Janoff-Bulman, 2010). Research on guilt and shame also supports a possible dissociation between self-directed versus other-directed violations. For example, failures in self-restraint or self-control (e.g., body-related transgressions including excessive eating, spending, gambling) typically elicit shame, whereas failures to be prosocial (e.g., failures to help or to care for others) often lead to guilt (Sheikh & Janoff-Bulman, 2010). Thus, research on guilt and shame (as well as anger and disgust) may reveal links between the HOPS model and models of moral cognition that highlight both an approach/avoid (i.e., prescriptive/proscriptive) dimension and a self/other dimension (Janoff-Bulman, Sheikh, & Hepp, 2009; Janoff-Bulman, 2011). Although here we focus on the proscriptive aspects of our model – avoiding harming others and avoiding defiling the self – future exploration should target other areas of this multi-dimensional space. For example, it would be worthwhile to know whether, in moral contexts, the approach dimension applies more easily to other-oriented actions, linked by guilt and anger, while the avoid dimension applies primarily to self-directed actions, linked by shame and disgust (but see the discussion in the final section below on helping the future self).

Our own preliminary evidence supports the link between harmful actions (associated with anger, guilt) and other-focus, on the one hand, and impure or defiling actions (associated with disgust, shame) and self-focus, on the other hand. Suppose you are forced by an experimenter to choose between two options: (1) harm: deliver a mild electric shock to another person, or (2) defile: expose another person to a bad smell using fart spray, a popular technology among moral psychologists studying disgust (e.g., Inbar, Pizarro, & Bloom, 2012; Rottman & Kelemen, 2012; Schnall, Haidt, Clore, & Jordan, 2008). Now suppose you are faced with the same choice for yourself: to spray (defile) or to shock (harm)? In a series of experiments, we found participants to be more resistant to harming another person but more resistant to defiling themselves (Dungan, Chakroff, Wu, & Young, in prep). For example, one group of participants sorted a series of harmful and impure outcomes from most preferred to least preferred for themselves, while a different group of participants sorted the same outcomes for a friend. Again, participants did not want to cause harm to others (and also judged harming others to be more wrong than defiling others). By contrast, participants did not want to defile themselves (and also judged defiling themselves to be more wrong than harming themselves). In addition, when asked to recount instances of their own past violations, participants reported their own harmful actions as impacting others more and their own purity violations as impacting themselves more. If one key difference between harm and purity violations is that harms are typically directed at others, and purity violations typically impact the self (see also Gray & Keeney, in prep, for how harm and purity violations may differ along the dimension of atypicality), then two further predictions follow.
First, intent should matter less for self-directed versus other-directed actions, whether the action is harmful or impure (e.g., cutting one’s own/another’s arm with a knife; smearing feces on one’s own/another’s arm). Second, other-directed actions should elicit more anger, and self-directed actions should elicit more disgust, again, whether the action is harmful or impure. We tested both of these predictions in a 2 (intentional vs. accidental) × 2 (harmful vs. impure) × 2 (other-directed vs. self-directed) design (Chakroff, Dungan, & Young, submitted; an illustrative analysis sketch is given below). In support of the first prediction, intent mattered more for judgments of harmful versus impure actions and also, importantly, more for other-directed versus self-directed actions. In line with the second prediction, harmful actions and other-directed actions elicited more anger, whereas impure actions and self-directed actions elicited more disgust. Participants were also more likely to deliver harsh “moral character” judgments in the case of self-directed actions (e.g., defiling oneself reflects poorly on one’s moral character) and harsh “moral action” judgments in the case of other-directed actions (e.g., hurting someone else is an immoral act; cf. Pizarro & Tannenbaum, 2011; Tannenbaum et al., 2011). Consistent with this overall pattern, other work reveals that participants’ moral judgments of suicide (the ultimate self-directed harm) are uniquely correlated with their (1) endorsement of purity morals as measured by the Moral Foundations Questionnaire (MFQ; Graham et al., 2009, 2011), (2) self-reported ratings of disgust in response to fabricated obituaries of people who committed suicide, and (3) judgments that the people who committed suicide had tainted the purity of their souls (Rottman, Kelemen, & Young, submitted). By contrast, moral judgments of homicide (the ultimate other-directed harm) are uniquely correlated with participants’ endorsement of harm morals on the MFQ. A follow-up experiment also revealed moral judgments of suicide to be uncorrelated with any judgment of harm – including judgments that people in the obituaries had harmed themselves, harmed other people, or even harmed God. Moral judgments of suicide were correlated only with purity judgments. Notably, although conservative, religious participants were more likely to judge suicide as immoral overall, compared to liberal, secular participants, the observed patterns emerged robustly even in participants who reported being both liberal and secular. In brief, participants perceived suicide, an extreme instance of self-harm, to be morally wrong insofar as they perceived suicide to be a purity violation. We do note that, broadly speaking, self-directed violations may be understood within the framework of dyadic morality (Gray, Schein, & Ward, submitted; Gray, Young, & Waytz, 2012) insofar as the self itself may be dyadic; for example, our present self can punish our past self or reward our future self (see the section “The Dyad Within Us” in Gray, Waytz, & Young, 2012). We also note, though, that the dyadic self may nevertheless be qualitatively distinct from the standard interpersonal dyad. We explore evidence for these ideas in greater detail in the final section below.
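To make the logic of the predictions from the 2 × 2 × 2 design concrete, the sketch below (Python, on simulated data) shows one way such interactions could be tested: wrongness ratings are regressed on intent, domain (harmful vs. impure), and target (other- vs. self-directed), and the intent × domain and intent × target interaction terms index whether the intent effect is larger for harmful and for other-directed actions. The design is simplified to between-subjects cells, and all effect sizes, codings, and variable names are our own assumptions rather than the published analysis.

```python
# Illustrative sketch (not the authors' analysis): simulated wrongness ratings
# for a 2 (intent) x 2 (domain) x 2 (target) design, analyzed with OLS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for intent in (0, 1):          # 0 = accidental, 1 = intentional
    for harmful in (0, 1):     # 0 = impure, 1 = harmful
        for other in (0, 1):   # 0 = self-directed, 1 = other-directed
            # Assumed pattern: intent raises wrongness overall, but more so
            # for harmful and for other-directed actions (simulated effect sizes).
            mean = 3.0 + intent * (0.5 + 1.0 * harmful + 1.0 * other)
            for rating in rng.normal(mean, 1.0, size=50):
                rows.append(dict(intent=intent, harmful=harmful,
                                 other=other, rating=rating))

df = pd.DataFrame(rows)
# Positive intent:harmful and intent:other coefficients would indicate that
# intent matters more for harmful (vs. impure) and for other-directed
# (vs. self-directed) actions, the two predicted interactions.
model = smf.ols("rating ~ intent * harmful + intent * other", data=df).fit()
print(model.summary().tables[1])
```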
Finally, before we conclude this section, we suggest that while we have focused on connecting Harm to Other-related cognition and Purity to Self-related cognition in the HOPS model, ongoing and future work aims to extend HOPS more broadly to accommodate other targets (e.g., ingroups and outgroups) and also other domains (e.g., loyalty, authority, justice, fairness). For now, we make two suggestions. First, concerns about one’s self may track with concerns about one’s group (ingroup). In the same set of experiments described above (Dungan, Chakroff, Wu, & Young, in prep), participants judged purity violations within one’s group more harshly and harm violations outside one’s group more harshly (cf. Schmidt, Rakoczy, & Tomasello, 2012). The guiding hypothesis here is that keeping yourself pure will do you good only insofar as the others around you are also pure; in other words, concerns about contagion or contamination apply more to ingroup than to outgroup members. Second, other concerns beyond purity may also apply more to the ingroup. In particular, we may assign greater moral weight to “binding” values such as loyalty within our own group. That is, we may prefer loyal friends and family, whereas we may value justice and fairness within but also, importantly, beyond our group. Recent work investigating whistle-blowing decisions directly reveals the tension between norms concerning loyalty (to friends and family who support the self) and norms concerning justice and fairness (Waytz, Dungan, & Young, submitted).

To summarize the key points of this section, we propose distinct functions (other versus self) for distinct moral domains (harm versus purity): it is wrong to harm others, and it is wrong to defile the self. We consider mental states more when evaluating interpersonal harms. By contrast, we are not as likely to consider (our own) mental states when evaluating self-directed purity violations (see Figure 2). Indeed, when we aim to interpret or evaluate the actions of other people, including harmful actions, we benefit from understanding their beliefs and intentions. Rarely, though, as we will propose in the next section, do we need to interpret or evaluate our own actions in the same way – and, perhaps as a direct result, we are less often in the position of monitoring and updating our model of our own mental states, across moral and non-moral contexts alike. In brief, key differences (at the levels of function and mechanism) between harm and purity norms may arise because of differences between self-related and other-related cognition outside the moral domain, the focus of the next section.

The Challenge of Thinking about One’s Own Thoughts

The previous sections presented evidence showing a reduced role for mental states in the case of purity or self-directed violations, compared to harmful or other-directed violations. As suggested above, we have many reasons to figure out what others are thinking and intending in determining “friend or foe”, whereas we are not often in the position of needing to represent our own mental states. Are we less likely to base moral judgments of our own actions on information about our own mental states compared to when we evaluate others? Are we less likely to spontaneously construct representations of our own mental states? Are we worse at retrieving such representations?
In this final section, we propose preliminary answers to these questions. This final section focuses on mental state reasoning in moral and non-moral contexts, in order to establish deeper cognitive roots for the links proposed in the previous sections: a reduced role for intent in self-related cognition and an enhanced role for intent in other-related cognition.

We think about other minds in order to make predictions about people’s future behavior and judgments about their past behavior. From carrying on simple conversations to evaluating accidents, social interaction and moral evaluation require us to process, in an ongoing fashion, other people’s mental states. Does she understand what I’m telling her? Is he interested in hearing more? Did she realize that my feelings got hurt? Will he misunderstand this message? As such, mental state reasoning allows us to interact with other moral agents and to identify the agents with whom we even want to interact in the first place; indeed, mental state reasoning may even be in primary service to moral cognition (Young & Waytz, in press). Yet while we think many thoughts, including thoughts about what others may be thinking, do we need to reason about our own minds to the same extent? We suggest not. We suggest that we may hold many beliefs without representing those beliefs qua beliefs (Malle et al., 2000). For instance, when you spoon sugar into your coffee, you hold the belief “this is sugar”, although you are unlikely to attribute that belief to yourself explicitly (e.g., “I believe that this is sugar”). An exceptional case may be one where you start feeling ill and, consequently, you may reflect on the (false) beliefs of your own past self: “While I thought it was sugar, perhaps it was poison”. We suggest that such reasoning about one’s own mental states occurs infrequently and typically when we discover our beliefs to be false and/or to lead to bad consequences. By contrast, social interaction with and moral evaluation of other people depend crucially on our ability to update our mental models of their mental states.

Is there any evidence for reduced or even impaired reasoning about one’s own mental states versus others’ mental states? For example, when people judge their own actions, versus others’ actions, are they more or less likely to use mental state information? One possibility is that mental states will actually exert a greater influence on judgments of the self. After all, we have access to our own minds in ways that other people (luckily) do not. Conversely, we often rely on indirect approaches (i.e., inferences) when thinking about other minds. A second possibility is that we do not have special access to our own mental states; we must infer our own thoughts just as we infer the thoughts of others, based on external, observable behavior (Bem, 1972). Indeed, developmental evidence shows that children are unable to reason about their own (past) false beliefs until they learn to reason about others’ (current) false beliefs (Atance & O’Neill, 2004). Thus, we may be equally bad (or good) at thinking about our own thoughts and others’ thoughts. A third possibility, as suggested above, is that judgments of the self reflect a reduced (though not absent) role for mental states – in other words, we are worse at thinking about our own thoughts.
To disambiguate between these possibilities, we presented hypothetical moral scenarios describing intentional versus accidental harms, written in the second and third person (Cushman & Young, unpub.). Some participants read scenarios written in the second person, featuring the participant as the actor (e.g., “You accidentally/intentionally hit your neighbor’s dog”), while other participants read scenarios in the third person (e.g., “Susan accidentally/intentionally hit her neighbor’s dog”). Participants judged their own and others’ intentional harms similarly harshly, but they judged their own accidents more harshly than others’ accidents. Although more work is needed, this pattern suggests preliminarily that participants may have focused more on what they themselves did (the bad outcome) than on what they meant to do (their innocent intent). Recent fMRI research also suggests that taking different perspectives on moral actions (e.g., harming someone versus being harmed) relies on distinct but interacting neural networks (Decety & Porges, 2011). At first glance, these results seem to contradict research on the Fundamental Attribution Error (FAE; Heider, 1958; Jones & Harris, 1967): we should be more lenient on ourselves as long as we attribute our behavior to external circumstances (e.g., “the roads were slick”, “the dog came out of nowhere”) rather than internal character (e.g., “I am a reckless driver”). And indeed, if and when we are forced to defend or justify ourselves to others, we may begin to attribute our behavior to external, attenuating circumstances. However, while FAE research distinguishes between environmental (external) factors and stable (internal) traits, we focus on neither. We focus instead on transient mental states, such as beliefs (which are internal) about the situation (which is external; Malle, 1999, 2004; Malle et al., 2000; Young & Saxe, 2010).

The preliminary evidence indicates that mental state information may exert less of an influence on moral judgments of the self. But are we actually worse at representing our own mental states? To test this prediction, we designed an fMRI study to measure people’s reasoning about their own versus others’ mental states in a non-moral context (Gweon, Young, & Saxe, 2011). In the “Self” version of the task, participants viewed a series of pictures; some of these pictures were designed to be misleading: participants would be misled about the object in the picture when seeing only a part of the picture. After participants completed a word association task for the partially occluded pictures, the full pictures were revealed one after the other. Participants then saw all previous pictures along with a set of new, previously unseen pictures; participants were instructed to think back to their belief about the object during the word association task and to mark whether they had been right or wrong in the word association task, or whether the picture was new. A different group of subjects participated in the “Other” version of the task; these participants first saw the full pictures revealed one after the other. Afterwards, they were told that another person who had not seen the full pictures would perform the word association task, and that they (the participants) would see the other person’s responses on their screen.
Subsequently, as participants viewed the previous pictures along with new pictures, they were told to think back to the other person’s belief and to indicate whether the other person was right or wrong, or whether the picture was new. If people are worse at thinking about their own thoughts, “Self” participants should perform worse than “Other” participants, which is precisely what we found. Participants were less accurate at retrieving their own beliefs than at retrieving other people’s beliefs. This difference in accuracy emerged regardless of whether the beliefs were initially false or true, suggesting that impression management (e.g., “I was right”) could not account for the full pattern. Importantly, “Self” participants were not less accurate than “Other” participants in general (i.e., for identifying the new pictures). In addition, explicit assessments of one’s own past beliefs (e.g., “was I right or wrong?”) elicited higher activation in the RTPJ than assessments of pictures as new, suggesting that simply thinking thoughts (in response to new stimuli) does not rely on the same neural mechanisms as thinking about our own past thoughts (when we are instructed to do so by an experimenter) or thinking about other people’s thoughts (which we may do spontaneously; Gweon, Young, & Saxe, 2011). Meanwhile, similar neural patterns (enhanced activity in the RTPJ) are associated with thinking about the thoughts of our own past selves and thinking about the thoughts of other people, hinting at the possibility that we consider our past selves much like we consider other people.

Here we return to the notion introduced earlier that self-directed violations, including, importantly, purity violations, may be understood within dyadic morality (Gray, Young, & Waytz, 2012) insofar as the self itself is also dyadic (see the section “The Dyad Within Us” in Gray, Waytz, & Young, 2012). Extensive work reveals that the self can be both an agent and a patient (as we have argued to be the case for many purity violations). Indeed, prior research shows that we perceive our past and future selves from an observer-like perspective, but we perceive our present selves from an actor-like perspective (Pronin & Ross, 2006; but see Quoidbach, Gilbert, & Wilson, 2013, for asymmetric perceptions of past and future selves). Moreover, the tendency to think about our future selves like other people is also associated with an unwillingness to make short-term sacrifices (in the present moment) for long-term (future) well-being (Bartels & Rips, 2010; Ersner-Hershfield, Wimmer, & Knutson, 2009). Neuroimaging evidence provides further support for this notion: participants who thought about how much they would enjoy an event in the future versus in the present showed less activity in brain regions implicated in self-referential processing, including the ventral MPFC (Mitchell et al., 2011). On the flip side, when people are made to feel psychologically connected to their future selves (Bartels & Urminsky, 2011; Ersner-Hershfield et al., 2009), or when their future selves are otherwise made salient (Bryan & Hershfield, 2011), people show greater self-control, reflecting greater regard for their future selves.
Finally, priming people to feel responsible to their future selves, just as they might feel responsible for close others (e.g., family members), also leads to the same effect – more “moral” treatment of the future self (Bryan & Hershfield, 2011). To link these findings back to the work discussed above: when we do engage in mental state reasoning for ourselves, we may do so mostly in cases where we see ourselves as dyadic – we may think about our past self (e.g., we should have known better) as we think about another person. Mental state reasoning for self and others may therefore rely on the same neural machinery. Nevertheless, because we don’t typically need to engage in mental state reasoning for ourselves, we may be worse at it – leading to the observed differences between self-related and other-related cognition as well as between harm and purity norms. In sum, even though we may have introspective access to the contents of our own minds in the moment, we may not spontaneously construct explicit representations of our current mental states, and therefore we may be worse at retrieving these states after the fact. This may also relate to how we think about (and treat) our past and future selves. Representing mental states may serve a special function – to help us understand the external actions of other moral agents in terms of their unobservable mental states, so that we can explain, predict, and evaluate their actions.